32 - Recap Clip 6.6: Hidden Markov Models (Part 1) [ID:30435]

Great. So let's recap what you should have done last week, I assume, in Markov models.

So the basic setting, which will be a recurring theme for the next couple of lectures, is that we have some discrete variable X, which is basically just a representation of a set of states we're interested in, and we can make observations.

We can describe the transition model between the different states as a single matrix, which just encodes the probabilities of getting from one state to the next.
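
Just to pin down the notation (my own addition, using the standard convention; the indexing on the slides may differ): if X_t ranges over S possible states, the transition model is an S-by-S matrix T with entries

T_{ij} = P(X_{t+1} = j \mid X_t = i),

so row i is the distribution over successor states given that the current state is i.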

The classic example you've been working with is this security guard in some bunker, I think, whose only observation is whether the administrator comes in with an umbrella or not, and who tries to guess whether it's raining outside.

It's a bunker, so there are no windows.

But taking that as an example, you get this matrix here with 0.7 for the case that the umbrella observation actually matches the weather outside, and 0.3 for the other cases.
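
In the matrix formulation, each observation gets folded into a diagonal matrix whose i-th diagonal entry is P(e_t \mid X_t = i). Using the 0.7/0.3 numbers from the clip and ordering the states as (rain, no rain), the two possible observations would look roughly like this (my own sketch of the notation, not a slide from the lecture):

O_{\text{umbrella}} = \begin{pmatrix} 0.7 & 0 \\ 0 & 0.3 \end{pmatrix}, \qquad O_{\text{no umbrella}} = \begin{pmatrix} 0.3 & 0 \\ 0 & 0.7 \end{pmatrix}.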

We can encode all of this as matrices, which is nice because matrices give us all the tools that linear algebra offers, and those tend to be rather efficient.

So we take one giant matrix for a transition model.

We take one matrix for our sensors, in this case just the observation of whether there is an umbrella or not.

And yeah, that gives us these two equations, one for filtering forward and one for smoothing backward, which is now basically just matrix algebra, which is nice.
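
For reference, the two equations in matrix-vector form (standard textbook notation; the slides may write them slightly differently) are

f_{1:t+1} = \alpha \, O_{t+1} T^{\top} f_{1:t} \qquad \text{(filtering, forward)}

b_{k+1:t} = T \, O_{k+1} \, b_{k+2:t} \qquad \text{(smoothing, backward)}

where T is the transition matrix, O_t is the diagonal observation matrix for the evidence at time t, f and b are the forward and backward messages, and \alpha is a normalizing constant.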

And if you take the classic forward-backward algorithm for this, you get basically quadratic time and linear space with respect to the number of states S, and linear scaling with respect to the sequence length T in both cases.
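
To make that complexity claim concrete, here is a minimal sketch of forward-backward smoothing in this matrix form (my own illustration, not code from the lecture; the names forward_backward, T, O_list, and prior are made up):

import numpy as np

def forward_backward(T, O_list, prior):
    """Smoothing in an HMM via the forward-backward algorithm (matrix form).

    T      : (S, S) transition matrix, T[i, j] = P(X_{t+1}=j | X_t=i)
    O_list : list of (S, S) diagonal observation matrices, one per evidence step
    prior  : (S,) initial state distribution
    Returns an (n, S) array of smoothed distributions P(X_k | e_{1:n}).
    """
    n, S = len(O_list), len(prior)

    # Forward pass: f_{1:k} = alpha * O_k T^T f_{1:k-1}
    f = np.zeros((n, S))
    prev = prior
    for t in range(n):
        msg = O_list[t] @ T.T @ prev        # one O(S^2) matrix-vector step
        f[t] = msg / msg.sum()
        prev = f[t]

    # Backward pass: b_{k:n} = T O_k b_{k+1:n}, combined with the stored f's
    smoothed = np.zeros((n, S))
    b = np.ones(S)
    for t in range(n - 1, -1, -1):
        post = f[t] * b
        smoothed[t] = post / post.sum()
        b = T @ O_list[t] @ b

    return smoothed

Each time step costs a constant number of S-by-S matrix-vector products, which is where the quadratic behavior in S comes from, while storing the forward messages needs space linear in both S and the number of time steps.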

I'm going to rush over this a bit because you've done all of this last week, I guess.

Yes?

Could you turn off the lights?

Oh, yeah, if I can figure out how.

This one maybe.

Is that better?

No, they're still on.

Ah, there are more buttons.

Did that do something?

Well, the lights are off, but it didn't do anything.

Okay.

I don't know.

We can't dim that, right?

Yes, I think there are two buttons.

Huh.

No.

That's not it.

Those are the only buttons available, though.

So yeah, sorry about that.

Right.

So let's look at an example.

Assume you're all working for NASA for some reason, sitting inside their control center, wherever that's actually located, and you're responsible for, let's say, monitoring the Curiosity rover.

Let's assume this robot, the rover, only has four actual sensors, which only tell you whether there is an obstacle in the north, south, east, or west direction.
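
As a small aside (my own illustration, not from the clip): four binary obstacle sensors mean each observation is one of 2^4 = 16 possible bit vectors, for example:

from itertools import product

# Four binary obstacle sensors (one per direction), so an observation is a
# 4-bit vector and there are 2**4 = 16 possible sensor readings.
directions = ("north", "south", "east", "west")
readings = [dict(zip(directions, bits)) for bits in product((0, 1), repeat=4)]
print(len(readings))  # 16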

Whatever you're reading here, just mentally substitute E by W. I think Professor Kolhase

Recap: Hidden Markov Models (Part 1)

Main video on the topic in chapter 6 clip 6.
